Human-Centered AI by Noemie Elhadad
HCAI brings together Generative AI, Predictive ML, and Reinforcement Learning.
The AI and allied communities went through (and are still going through) some very human stages, from unbridled excitement to apprehension about the technology's implications for how it would serve humans1.
There are some commonsensical, obvious challenges with making AI responsible and ethical. In the 2020s, the community's discourse started including the concept of "human-centering" AI (in addition to purely algorithmically centering it). The discussion of how to make AI "responsible" reminded me of Isaac Asimov's "Three Laws of Robotics":
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
What is the nature of AI that must give us pause when designing, developing, and deploying tools with it? How do we design these tools?
Alignment
We design AI tools in alignment with our (human) values: privacy, inclusivity ("who's in the room when a chatbot is being developed?2"), justice, safety, beneficence, and so on. An interesting one is autonomy and agency: the AI must not undermine human choice.
How do you encode these values? (1) Explicitly, and (2) implicitly with Reinforcement Learning from Human Feedback (RLHF).
But human beings can kinda suck: what are the values of the coders themselves? What do we do? There's some research on pluralistic alignment, which considers multiple human perspectives rather than a single set of values.
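To make the "implicit" route concrete: RLHF typically starts by training a reward model on human preference pairs, using a Bradley-Terry-style loss in which the preferred response should score higher than the rejected one. This is a minimal illustrative sketch of that loss (not from the lecture); the function name and example scores are my own.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss used when training an RLHF reward model.

    The model's probability that the human-preferred response outranks
    the rejected one is sigmoid(r_chosen - r_rejected); the loss is the
    negative log of that probability.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the reward model already scores the human-preferred answer higher,
# the loss is small; if it scores it lower, the loss is large, pushing
# the model's scores toward the human preference.
low_loss = preference_loss(2.0, -1.0)   # model agrees with the human
high_loss = preference_loss(-1.0, 2.0)  # model disagrees with the human
```

A policy (e.g., an LLM) is then fine-tuned with reinforcement learning against this learned reward, which is how human values get encoded "implicitly" rather than as explicit rules.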
Explainability and Interpretability
Your LLM/model can spit out all the predictions and explain how it got there, but research has shown that experts (like doctors) will often just ignore them. What does help is engaging the expert and providing them with actionable insights.
Prescriptive AI
Brief discussion of the professor's own project, Phendo; Reinforcement Learning; and the importance of qualitative assessment (e.g., via user feedback).
Design Opportunities in HCAI
- Control and Autonomy
- Context
- Explainability
- Individual vs Population
Note that Safety is conspicuously absent. Humans kinda suck: they assume that, at worst, the AI will give them useless answers (not harmful ones).